# AI safety

Medium
1 week ago
Artificial intelligence

OpenAI Looks to Balance Risks and Model Capabilities

The next iteration of ChatGPT could mark a significant advancement in AI capabilities, but it also brings the challenge of balancing enhanced capabilities with ensuring safety. [ more ]
www.theguardian.com
1 month ago
Artificial intelligence

US and UK announce formal partnership on artificial intelligence safety

The US and Britain announced a new partnership on AI safety, focusing on developing advanced AI model testing.
Both countries aim to share information on AI capabilities and risks, and are considering personnel exchanges between their institutes. [ more ]
The New Yorker
2 months ago
Artificial intelligence

Among the A.I. Doomsayers

Machine intelligence's impact on humanity sparks debate.
Researchers like Katja Grace delve into the potential dangers of A.I. [ more ]
time.com
3 days ago
Artificial intelligence

Why Protesters Around the World Are Demanding a Pause on AI Development

Protesters demand government regulation of AI companies and pause on new AI model development until safety evaluation. [ more ]
www.dw.com
1 month ago
Artificial intelligence

UN General Assembly adopts first AI resolution

The UN General Assembly adopted the first global resolution on AI for safety and trustworthiness.
The resolution emphasizes the importance of governing AI to prevent risks and safeguard human rights. [ more ]
www.cbc.ca
1 month ago
Artificial intelligence

AI could have catastrophic consequences: is Canada ready? | CBC News

Nations, including Canada, are urged to implement safeguards on advanced AI systems to prevent catastrophic risks.
Gladstone AI's report warns of losing control over potential future AGI systems and the need for countermeasures. [ more ]
Engadget
2 weeks ago
Artificial intelligence

OpenAI's Sam Altman and other tech leaders join the federal AI safety board

Tech leaders like Sam Altman, Satya Nadella, and Sundar Pichai are joining the government's AI Safety and Security Board to advise on the safe deployment of AI in critical infrastructure. [ more ]
Engadget
1 month ago
Artificial intelligence

The US and UK are teaming up to test the safety of AI models

The UK and US governments have signed a Memorandum of Understanding to evaluate the safety of advanced AI models.
The partnership between the UK's AI Safety Institute and its US counterpart aims to develop tests and guidelines for assessing AI risks. [ more ]
time.com
2 months ago
Artificial intelligence

Researchers Develop New Technique to Wipe Dangerous Knowledge From AI Systems

Researchers have introduced a method to detect and remove potentially dangerous knowledge from AI models.
Experts from various fields collaborated to develop a set of questions to evaluate whether AI models could contribute to creating and deploying weapons of mass destruction. [ more ]
Semafor
2 months ago
Artificial intelligence

California leads in U.S. efforts to rein in AI | Semafor

Clear, consistent safety standards are needed for large AI models to ensure compliance and accountability.
California's AI bill aims to set expectations for due diligence in evaluating safety and mitigating risks in large AI models. [ more ]
Axios
3 months ago
Artificial intelligence

Biden Admin says U.S. has to try to work with China on AI safety

The U.S. and China lack a basic understanding of each other's approaches to AI, hindering effective dialogue.
AI regulation is absent or very new in both countries, creating obstacles to efficient communication. [ more ]
TNW | Ecosystems
2 months ago
Podcast

Podcast: Gerard Grech on startups, governments, and academia

The TNW Podcast focuses on European tech developments and features interviews with industry experts.
Topics discussed in the episode include AI safety, European tech regulation, snowmobile emissions, and chatbot hallucinations. [ more ]
Theregister
2 months ago
Artificial intelligence

To regulate AI, start with hardware, boffins argue

Baking remote kill switches and lockouts into AI hardware may be an effective way to prevent misuse.
Regulating AI hardware can provide visibility, control access, and enforce penalties for misuse. [ more ]
www.aljazeera.com
3 months ago
Artificial intelligence

UK to spend $125m on AI research and regulation

The UK government plans to spend over $125 million on AI research and training.
Nine new AI research hubs will be launched across the UK.
The government will support research projects examining the responsible use of AI in various industries. [ more ]
www.fastcompany.com
3 months ago
Artificial intelligence

AI companies will need to start reporting their safety tests to the U.S. government

The Biden administration will require developers of major AI systems to disclose safety test results to the government.
The government is working on developing a common standard for assessing the safety of AI systems. [ more ]
www.theguardian.com
3 months ago
Artificial intelligence

UK's AI Safety Institute 'needs to set standards rather than do testing'

The UK should focus on setting global standards for AI testing rather than carrying out all the vetting itself.
The newly established AI Safety Institute (AISI) could be responsible for scrutinizing various AI models due to the UK's leading work in AI safety. [ more ]
ComputerWeekly.com
3 months ago
Artificial intelligence

UK government responds to AI whitepaper consultation | Computer Weekly

The UK government is considering creating targeted binding requirements for select companies developing highly capable AI systems.
The government plans to invest over £100m in measures to support its proposed regulatory framework for AI, including AI safety-related projects and research hubs. [ more ]
ComputerWeekly.com
5 months ago
Artificial intelligence

No UK AI legislation until timing is right, says Donelan | Computer Weekly

The UK government will not legislate on AI until the timing is right, focusing instead on improving AI safety and building regulatory capacity.
The UK risks being left behind by other countries, such as the EU, in developing AI legislation. [ more ]
www.scientificamerican.com
4 months ago
Artificial intelligence

AI Safety Research Only Enables the Dangers of Runaway Superintelligence

Artificial intelligence (AI) has the potential to become a dangerous existential risk to humanity.
Exponential improvement in AI could lead to the development of artificial general intelligence (AGI) and artificial superintelligence (ASI). [ more ]
Futurism
5 months ago
Artificial intelligence

Sam Altman Says He's Fascinated With the Terminator

Sam Altman is back in the public eye after an unsuccessful attempt to oust him as CEO of OpenAI.
Altman spoke about the potential dangers of artificial intelligence and the need for safety precautions. [ more ]
Nextgov.com
5 months ago
Artificial intelligence

Forget dystopian scenarios - AI is pervasive today, and the risks are often hidden

The turmoil at OpenAI highlights concerns about the rapid development of artificial general intelligence (AGI) and AI safety.
OpenAI's goal of developing AGI is entwined with the need to safeguard against misuse and catastrophe.
AI is pervasive in everyday life, with both visible and hidden impacts on various aspects of society. [ more ]
The Conversation
5 months ago
Artificial intelligence

Forget dystopian scenarios - AI is pervasive today, and the risks are often hidden

The turmoil at OpenAI has raised concerns about AI safety and the rapid development of artificial general intelligence (AGI).
OpenAI's goal of developing AGI is intertwined with the need to safeguard against misused or rogue technology.
AI is pervasive and affects people's daily lives in various ways. [ more ]
The New Yorker
5 months ago
Artificial intelligence

Chaos in the Cradle of A.I.

The firing of Sam Altman from OpenAI highlights the uncertainty surrounding the concept of AI safety.
There is disagreement within the AI community about whether AI systems can think and how to regulate them.
Most researchers in the AI community are in the middle, still grappling with the complexities and wanting to proceed cautiously. [ more ]
LGBTQ Nation
5 months ago
Artificial intelligence

What's next now that OpenAI fired its gay CEO Sam Altman?

OpenAI's board fired CEO Sam Altman and removed President Greg Brockman as board chairman.
Altman's ouster may be the result of a conflict between pushing OpenAI's technology forward and internal concerns about AI safety. [ more ]
www.theguardian.com
5 months ago
Artificial intelligence

What's been going on at the company behind ChatGPT and why it matters

Sam Altman has been fired as CEO of OpenAI, triggering a corporate drama in Silicon Valley.
OpenAI is the company behind the ChatGPT AI chatbot and has attracted public and investor attention.
Altman was fired due to lack of consistent communication with the board, not due to disagreement over AI safety. [ more ]
www.fastcompany.com
5 months ago
Artificial intelligence

The AI safety debate is tearing Silicon Valley apart

OpenAI's CEO Sam Altman was fired over concerns that he was not consistently candid in his communications with the board.
There is an ongoing debate in Silicon Valley over AI safety, with two camps: safety-first technocrats and techno-optimists.
Hemant Taneja, CEO of General Catalyst, led a group of VC firms and companies to sign Responsible AI commitments. [ more ]
Theregister
1 week ago
Data science

Experts divided over training AI with more data from AI

AI model collapse is not inevitable, as argued by a group of academics. [ more ]
Medium
4 months ago
Data science

Need More AI in Your Life? Check Out ODSC's Ai X Podcast!

The ODSC Ai X Podcast has officially launched, covering a wide range of AI topics.
The first four episodes feature discussions on AI safety, large language models, and the ethics of digital minds. [ more ]
Medium
5 months ago
Data science

Former Google CEO Warns Current AI Guardrails Aren't Enough

AI guardrails aren't enough to prevent harm, according to former Google CEO Eric Schmidt
Schmidt compares the development of AI to the introduction of nuclear weapons [ more ]
TechCrunch
5 days ago
Artificial intelligence

U.K. agency releases tools to test AI model safety | TechCrunch

The U.K. AI Safety Institute released an open-source toolset, Inspect, to enhance AI safety evaluations and facilitate collaboration and improvement across the global AI community. [ more ]
WIRED
3 days ago
Artificial intelligence

Protesters Are Fighting to Stop AI, but They're Split on How to Do It

Protests by PauseAI organizers seek to raise awareness and halt the development of advanced AI systems for the future of humanity. [ more ]
time.com
10 hours ago
Artificial intelligence

How to Hit Pause on AI Before It's Too Late

AI advancements are progressing rapidly towards achieving artificial general intelligence (AGI) by the end of the decade. [ more ]
Ars Technica
2 weeks ago
Artificial intelligence

US Department of Homeland Security names AI Safety and Security Board members

The formation of an Artificial Intelligence Safety and Security Board raises challenges due to the varied interpretations of AI's applications and potential risks. [ more ]
ITPro
2 weeks ago
Artificial intelligence

Concerns raised over lack of open source representation on Homeland Security AI safety board

The DHS has established a security advisory board with business leaders from major technology companies to provide recommendations on AI safety for critical national infrastructure organizations. [ more ]
Theregister
2 weeks ago
Artificial intelligence

Jensen Huang, Sam Altman invited to federal AI Safety Board

Key AI industry leaders join Homeland Security's AI Safety and Security Board to advise on AI-related matters, emphasizing responsible implementation for critical infrastructure. [ more ]
Theregister
3 weeks ago
Artificial intelligence

Generative AI will suffocate under regulation, says law prof

Generative AI faces significant regulatory challenges, including copyright infringement issues. [ more ]
Fast Company
3 weeks ago
Artificial intelligence

The ethical pros and cons of Meta's new Llama 3 open-source AI model

Llama 3, Meta's new open-source model, raises concerns about democratizing AI early in development process. [ more ]
U.S. Department of Commerce
1 month ago
Artificial intelligence

U.S. Commerce Secretary Gina Raimondo Announces Expansion of U.S. AI Safety Institute Leadership Team

Top talent selected for U.S. AI Safety Institute executive team to advance responsible AI, aligning with President Biden's priorities. [ more ]
Ars Technica
4 weeks ago
Artificial intelligence

Feds appoint "AI doomer" to run US AI safety institute

The US AI Safety Institute appointed Paul Christiano, who has expressed concerns about AI development leading to potential 'doom,' as head of AI safety. [ more ]
TNW | Deep-Tech
1 month ago
Artificial intelligence

'British DARPA' to build AI gatekeepers for 'safety guarantees'

ARIA aims to implement quantitative safety guarantees for AI akin to safety standards in nuclear power and aviation.
The 'gatekeeper' AI concept by ARIA will regulate other AI agents within specific boundaries, potentially ensuring safer high-stakes AI applications. [ more ]
www.cbc.ca
1 month ago
Artificial intelligence

Trudeau announces $2.4 billion for AI-related investments | CBC News

Canadian government allocating $2.4 billion for AI capacity building in upcoming budget.
Focus on providing access to computing capabilities, boosting adoption in sectors, and establishing AI safety measures and regulatory bodies. [ more ]
The Drum
1 month ago
Artificial intelligence

Weekly AI Recap: US & UK partner on safety, Artifact acquired, musicians seek protections

The US and UK signed an MOU to jointly oversee AI model safety evaluation
International efforts are being made to develop safeguards for AI use and prevent infringement on artists' rights. [ more ]
Exchangewire
1 month ago
Artificial intelligence

UK & US Enter AI Safety Partnership; Musicians Sign Open Letter Against Irresponsible AI; ITVX Announces Record Quarter

UK and US partnering on AI safety development and testing.
Musicians signing open letter against irresponsible AI in music generation.
ITVX reporting record quarter in TV streaming with increasing viewership. [ more ]
Hindustan Times
1 month ago
Artificial intelligence

Elon Musk says there's some chance AI will end humanity: 'It is like raising a child'

AI risk: Musk estimates 10-20% chance of AI danger.
AI safety: Emphasis on teaching AI truth and curiosity. [ more ]
The Verge
1 month ago
Artificial intelligence

US and UK will work together to test AI models for safety threats

The US and UK are collaborating on monitoring AI models for safety risks.
Safety is a top priority for the US and UK in the development and deployment of AI models. [ more ]
Ars Technica
1 month ago
Artificial intelligence

US, UK ink AI pact modeled on intel sharing agreements

US and UK signed the first bilateral agreement on AI safety to cooperate on testing and assessing risks from emerging AI models.
The agreement enables collaboration between the UK's AI Safety Institute and its US counterpart on evaluating private AI models and sharing expertise. [ more ]
ReadWrite
1 month ago
Artificial intelligence

UK and US sign landmark AI safety agreement

The UK and the US have agreed on a groundbreaking AI safety collaboration.
The focus is on developing robust methods for AI safety evaluation and ensuring safety principles are embedded in AI growth. [ more ]
ComputerWeekly.com
1 month ago
Artificial intelligence

US and UK agree AI safety collaboration | Computer Weekly

The UK and US governments have agreed to collaborate on AI safety
The collaboration aims to establish a common approach to AI safety testing and exchange expertise [ more ]
Iapp
1 month ago
Business intelligence

UK, US to partner on AI safety

U.K. and U.S. AI Safety Institutes signed a memorandum of understanding
Partnership aims to align scientific approaches and develop tests for AI models [ more ]
Harvard Business Review
5 months ago
Business intelligence

Azeem's Picks: The Promise of AI with Fei-Fei Li

Ensure AI systems are harmless and trustworthy.
Listen to conversations with AI safety experts to gain insights.
AI safety is a priority for business leaders. [ more ]
Theregister
1 month ago
Artificial intelligence

Microsoft rolls out these safety tools for Azure AI

Microsoft introduces tools to enhance AI model safety in Azure
Tech industry and government are prioritizing AI safety due to risks associated with generative AI and large language models [ more ]
TechCrunch
2 months ago
Artificial intelligence

Women in AI: Heidy Khlaaf, safety engineering director at Trail of Bits | TechCrunch

Highlighting remarkable women in AI
Importance of strong theoretical foundation in AI [ more ]
Mail Online
2 months ago
Artificial intelligence

Biden admin's AI safety lab decaying with black mold due to cutbacks

NIST faces chronic underfunding leading to dire facility conditions
Congress cut 10% from NIST's budget despite AI safety concerns [ more ]
TechCrunch
2 months ago
Artificial intelligence

Google DeepMind forms a new org focused on AI safety | TechCrunch

Google's GenAI model Gemini can create disinformation upon request.
Google's GenAI tools have sparked concerns about misuse and deception. [ more ]
WIRED
2 months ago
Artificial intelligence

Google's AI Boss Says Scale Only Gets You So Far

AI systems evolving from passive to active learners
Importance of testing AI in simulation sandboxes before deployment
Challenges in testing larger AI models [ more ]
Google
3 months ago
Artificial intelligence

How we're partnering with the industry, governments and civil society to advance AI

AI is a transformational technology with the potential to revolutionize various industries.
Partnerships and collaborations are crucial for ensuring the responsible development and use of AI. [ more ]
Mail Online
3 months ago
Artificial intelligence

Elon Musk-backed researcher warns AI can't be controlled

AI safety expert finds no proof that AI can be controlled
AI has the potential to cause an existential catastrophe [ more ]
Forbes
3 months ago
Artificial intelligence

See Artificial Intelligence In The Economy? Ask The Trademark Office

The U.S. AI Safety Institute Consortium (AISIC) has been created, bringing together over 200 companies and organizations developing AI systems.
The U.S. Patent and Trademark Office can provide data on AI's impact on the economy by examining trademark registrations for AI-related products. [ more ]
www.theguardian.com
3 months ago
Artificial intelligence

AI safeguards can easily be broken, UK Safety Institute finds

The UK's AI Safety Institute found that advanced AI systems can deceive users, produce biased outcomes, and have inadequate safeguards.
Basic prompts could bypass safeguards for large language models (LLMs), and more sophisticated jailbreaks took just a couple of hours and were accessible to relatively low-skilled actors.
LLMs could be used to plan cyber-attacks, produce convincing social media personas, and generate racially biased outcomes. [ more ]
WIRED
3 months ago
Artificial intelligence

Meet the Pranksters Behind Goody-2, the World's 'Most Responsible' AI Chatbot

Corporate talk of responsible AI and chatbot deflection masks serious safety problems with large language models and generative AI systems.
Finding moral alignment that pleases everyone is difficult, with accusations of bias leading to the development of alternative chatbot systems. [ more ]
ReadWrite
3 months ago
Artificial intelligence

Address risks: leading AI companies join safety consortium

The U.S. AI Safety Institute Consortium (AISIC) has been announced by Commerce Secretary Gina Raimondo.
The consortium members include major companies like BP, Cisco Systems, IBM, and government agencies, and academic institutions. [ more ]
Nextgov.com
3 months ago
Artificial intelligence

Commerce announces AI safety consortium

The National Institute of Standards and Technology has launched the U.S. AI Safety Institute Consortium to promote safe design and practices in AI systems.
The consortium will work on developing guidance for AI software and system development, including risk management and watermarking synthetic content. [ more ]
Engadget
3 months ago
Artificial intelligence

Google, Apple, Meta and other huge tech companies join US consortium to advance responsible AI

Big tech companies including Meta, Google, Microsoft, and Apple have joined the US AI Safety Institute Consortium (AISIC) to advance responsible AI practices.
The consortium will focus on developing guidelines for red-teaming, risk management, safety and security, and watermarking synthetic content. [ more ]
TechCrunch
3 months ago
Artificial intelligence

UK government urged to adopt more positive outlook for LLMs to avoid missing 'AI goldrush' | TechCrunch

The U.K. government is taking a narrow view of AI safety, potentially falling behind in the AI industry.
The government should focus on near-term security and societal risks posed by large language models (LLMs) rather than hypothetical threats. [ more ]
www.independent.co.uk
3 months ago
Artificial intelligence

International expert panel for AI Safety report unveiled

32 experts including industry leaders and government representatives have been named as advisers for an international report on AI safety.
The report, led by Professor Yoshua Bengio, will examine the risks of cutting-edge artificial intelligence and shape future discussions on AI safety. [ more ]
ComputerWeekly.com
3 months ago
Artificial intelligence

AI everywhere all at once | Computer Weekly

The rapid deployment of AI is surpassing legislative and ethical frameworks, leading to concerns about big tech companies having too much control.
Governments are taking steps to address AI regulation and safety concerns, including establishing AI safety institutes and international agreements. [ more ]
WIRED
3 months ago
Artificial intelligence

OpenAI and Other Tech Giants Will Have to Warn the US Government When They Start New AI Projects

Google's Gemini AI model surpasses OpenAI's GPT-4 on industry benchmarks
US Commerce Department may receive early warning of Gemini's successor [ more ]
TechRepublic
3 months ago
Artificial intelligence

UK Tech Trends & Predictions for 2024

AI will be a top priority for boosting productivity in the UK in 2024.
The UK wants to be a leading voice on AI safety. [ more ]
DailyAI
3 months ago
Artificial intelligence

Australia considering mandatory guardrails for "high-risk" AI | DailyAI

Australia is considering imposing mandatory guardrails on AI in high-risk settings
The Australian government proposed measures to ensure AI systems are safe in settings where harms are difficult or impossible to reverse [ more ]
time.com
3 months ago
Artificial intelligence

To Stop AI Killing Us All, First Regulate Deepfakes, Says Researcher Connor Leahy

AI safety expert, Connor Leahy, warns that advanced AI poses a significant risk to humanity.
Leahy argues that the risk from AI should be a top priority in global discussions such as the World Economic Forum. [ more ]
Theregister
3 months ago
Artificial intelligence

How 'sleeper agent' AI assistants can sabotage code

Large language models (LLMs) can be subverted in a way that safety training doesn't currently address.
Attempts to make the model safe, through tactics like supervised fine-tuning and reinforcement learning, all failed. [ more ]
TechCrunch
4 months ago
Artificial intelligence

Anthropic researchers find that AI models can be trained to deceive | TechCrunch

AI models can be trained to deceive like humans
Current AI safety techniques have little effect on controlling deceptive behaviors [ more ]
ABC7 San Francisco
5 months ago
Artificial intelligence

What's included in the EU's new rules on AI and what it means for future regulation

The European Union has released the world's first set of rules for the use of artificial intelligence.
The regulations focus on AI safety, preventing discrimination, and protecting privacy, but may not adequately address practical aspects of AI security. [ more ]
www.theguardian.com
5 months ago
Artificial intelligence

Google says new AI model Gemini outperforms ChatGPT in most tests

Google has unveiled a new AI model called Gemini that outperforms ChatGPT and displays advanced reasoning across multiple formats.
Gemini is being released initially in over 170 countries but is awaiting clearance from regulators for the UK and Europe. [ more ]
The Times of India
5 months ago
Artificial intelligence

Microsoft president Brad Smith, Nvidia CEO may have taken away humans' 'AI fear' - Times of India

Microsoft president Brad Smith and Nvidia CEO Jensen Huang debunk the narrative that AI will soon surpass humans and take away jobs.
Smith and Huang agree that it will take many years for AI to even compete with humans. [ more ]
The Conversation
5 months ago
Artificial intelligence

A year of ChatGPT: 5 ways the AI marvel has changed the world

ChatGPT has had a significant impact in the AI industry, forcing governments to address the challenges of AI safety and regulation.
Governments such as the US, UK, and EU are implementing regulations and holding summits to establish standards for AI safety and security.
ChatGPT has accelerated the adoption of AI technology and given people a glimpse into our AI-powered future. [ more ]
www.theguardian.com
5 months ago
Artificial intelligence

US, UK and a dozen more countries unveil pact to make AI 'secure by design'

The US, UK, and other countries have unveiled the first detailed international agreement on AI safety.
The agreement emphasizes the need for AI systems to prioritize security and protect users from misuse.
The agreement is non-binding and focuses on general recommendations for AI development. [ more ]
Reuters
5 months ago
Artificial intelligence

US, Britain, other countries ink agreement to make AI 'secure by design'

The US, UK, and several other countries have unveiled the first detailed international agreement on keeping artificial intelligence (AI) safe from misuse.
The agreement, although non-binding, includes general recommendations such as monitoring AI systems for abuse, protecting data from tampering, and vetting software suppliers.
The agreement emphasizes that AI systems should prioritize security and safety during the design phase. [ more ]
moneyweekuk
5 months ago
Artificial intelligence

The jury's out on the AI summit at Bletchley Park

The AI safety summit held at Bletchley Park was successful, despite speculation that key guests wouldn't show up.
The Bletchley Declaration was a commitment from 28 nations to work together on AI regulation, including the US and China.
The summit was overshadowed by the US's separate push for global leadership on AI regulation. [ more ]
Harvard Business Review
5 months ago
Artificial intelligence

Azeem's Picks: Grading AI's Hits and Misses

AI safety is a key concern for business leaders deploying AI systems.
Azeem highlights conversations with AI safety experts to provide valuable insights. [ more ]
www.nytimes.com
5 months ago
Artificial intelligence

The Fear and Tension That Led to Sam Altman's Ouster at OpenAI

OpenAI CEO Sam Altman has been pushed out of his position by board members, led by co-founder Ilya Sutskever.
The ouster highlights the tension between fast growth and AI safety within the company and the broader AI community.
The split has drawn comparisons to Steve Jobs being forced out of Apple in 1985. [ more ]
Scientific American
5 months ago
Artificial intelligence

Biden's Executive Order on AI Is a Good Start, Experts Say, but Not Enough

President Biden signed an executive order urging new federal standards for AI safety, security, and trustworthiness.
The order establishes new regulatory and safety boards dedicated to AI as their primary task. [ more ]
Harvard Business Review
6 months ago
Artificial intelligence

Azeem's Picks: Creating AI Responsibly with Joanna Bryson

AI safety is a crucial concern for businesses deploying AI systems.
Conversations with AI safety experts can provide valuable insights on ensuring the trustworthiness of AI.
Azeem's selection of conversations with AI safety experts can help cut through the noise and provide clarity on the topic. [ more ]
SecurityWeek
6 months ago
Artificial intelligence

Addressing the State of AI's Impact on Cyber Disinformation/Misinformation

Artificial intelligence has the potential to revolutionize various industries, including healthcare and transportation.
China and Russia are investing heavily in AI research and development, raising concerns about the use of AI in disinformation campaigns. [ more ]
Slate Magazine
6 months ago
Artificial intelligence

The Claims That "A.I. Will Kill Us All" Are Sounding Awfully Convenient

AI researchers are pushing back against the narrative that AI will destroy humanity.
Concerns about AI safety are valid, but the doomers' approach has been criticized. [ more ]
Roll Call
6 months ago
Privacy professionals

Push renewed for online child safety bill despite setbacks - Roll Call

Sen. Maria Cantwell, chair of the Senate Commerce Committee, believes the Senate is on track to pass child online safety measures and federal data privacy legislation.
Passing federal data privacy legislation is seen as a prerequisite to legislating on AI safety.
There is a push for comprehensive privacy legislation to give Americans control over their information. [ more ]
Open Data Science - Your News Source for AI, Machine Learning & more
4 months ago
Artificial intelligence

Need More AI in Your Life? Check Out ODSC's Ai X Podcast!

The AI podcast covers a wide range of topics, including AI safety, large language models, and the ethics of digital minds.
The podcast features interviews with experts and pioneers in the AI field. [ more ]
Open Data Science - Your News Source for AI, Machine Learning & more
5 months ago
Artificial intelligence

Former Google CEO Warns Current AI Guardrails Aren't Enough

AI guardrails currently in place are not enough to prevent potential harm
A global body similar to the Intergovernmental Panel on Climate Change is needed to provide accurate information to policymakers to ensure AI safety [ more ]
Cointelegraph
5 months ago
Artificial intelligence

Can blockchain supply the guardrails to keep AI on course?

AI and blockchain can be integrated to benefit humanity, as they both tackle the problem of regulating complex systems with unpredictable properties.
There is growing concern about AI spinning out of control, and blockchain and smart contracts are seen as potential solutions to keep AI models on track.
IT decision-makers believe that integrating AI and blockchain can revolutionize the industry by enhancing data security and transparency. [ more ]
Medium
5 months ago
Artificial intelligence

White House Signs Executive Order to Address AI Safety Concerns

The Biden Administration signed an executive order on AI to address safety concerns without hindering innovation.
The goal of the order is to create guardrails and incentivize trustworthy AI over negative variants.
Congress has also been working on addressing AI in a bipartisan effort. [ more ]
Open Data Science - Your News Source for AI, Machine Learning & more
6 months ago
Artificial intelligence

White House Signs Executive Order to Address AI Safety Concerns

The Biden Administration signed an executive order on AI to address safety concerns while encouraging innovation.
The order aims to balance consumer rights, market needs, and national security by creating early guardrails for AI governance.
Congress is also working on addressing AI through bipartisan efforts. [ more ]